
    Bayesian Grammar Induction for Language Modeling

    We describe a corpus-based induction algorithm for probabilistic context-free grammars. The algorithm employs a greedy heuristic search within a Bayesian framework, and a post-pass using the Inside-Outside algorithm. We compare the performance of our algorithm to n-gram models and the Inside-Outside algorithm in three language modeling tasks. In two of the tasks, the training data is generated by a probabilistic context-free grammar, and in both tasks our algorithm outperforms the other techniques. The third task involves naturally-occurring data, and in this task our algorithm does not perform as well as n-gram models but vastly outperforms the Inside-Outside algorithm.
    Comment: 8 pages, LaTeX, uses aclap.sty
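    The post-pass above relies on the Inside-Outside algorithm. As a minimal sketch of the inside pass it builds on (not the paper's code), the following computes the probability that a toy PCFG in Chomsky normal form generates a sentence; the grammar, rule probabilities, and function names are illustrative assumptions.

```python
# Inside pass for a toy PCFG in Chomsky normal form (illustrative sketch).
from collections import defaultdict

# Hypothetical toy grammar: binary rules (A -> B C) and lexical rules (A -> w).
binary = {("S", ("NP", "VP")): 1.0,
          ("NP", ("Det", "N")): 1.0,
          ("VP", ("V", "NP")): 1.0}
lexical = {("Det", "the"): 1.0,
           ("N", "dog"): 0.5, ("N", "cat"): 0.5,
           ("V", "saw"): 1.0}

def inside_probability(words, start="S"):
    n = len(words)
    # inside[(i, j)][A] = P(A derives words[i..j])
    inside = defaultdict(lambda: defaultdict(float))
    for i, w in enumerate(words):
        for (a, word), p in lexical.items():
            if word == w:
                inside[(i, i)][a] += p
    for span in range(2, n + 1):
        for i in range(n - span + 1):
            j = i + span - 1
            for (a, (b, c)), p in binary.items():
                for k in range(i, j):  # split point between B and C
                    inside[(i, j)][a] += p * inside[(i, k)][b] * inside[(k + 1, j)][c]
    return inside[(0, n - 1)][start]

print(inside_probability("the dog saw the cat".split()))  # 0.25
```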

    ECONOMIC IMPLICATIONS OF THE FAIR ACT ON U.S. PEANUT PRODUCERS

    This study analyzed the potential economic impacts of the FAIR Act, under GATT and NAFTA, on the U.S. peanut industry. Results indicate that the economic impacts of the new program combined with the trade agreements are profound for the peanut industry in both the short and long term. Changes to the peanut program could substantially decrease peanut producers' farm income, eliminate government financial costs related to excessive quotas, and transfer peanut growers' program benefits back to peanut consumers. Increasing imports of foreign peanuts under free/reduced trade barrier agreements would transfer peanut producers' program benefits to domestic peanut importers and to foreign exporters who sell peanuts to the U.S. Note: Tables 3 and 4 not included in machine-readable file; contact authors for copies.
    Keywords: economic impacts, FAIR Act, peanuts, quota, support price, Agricultural and Food Policy, Crop Production/Industries

    Safe Mutations for Deep and Recurrent Neural Networks through Output Gradients

    While neuroevolution (evolving neural networks) has a successful track record across a variety of domains from reinforcement learning to artificial life, it is rarely applied to large, deep neural networks. A central reason is that while random mutation generally works in low dimensions, a random perturbation of thousands or millions of weights is likely to break existing functionality, providing no learning signal even if some individual weight changes were beneficial. This paper proposes a solution by introducing a family of safe mutation (SM) operators that aim, within the mutation operator itself, to find a degree of change that does not alter network behavior too much but still facilitates exploration. Importantly, these SM operators do not require any additional interactions with the environment. The most effective SM variant capitalizes on the intriguing opportunity to scale the degree of mutation of each individual weight according to the sensitivity of the network's outputs to that weight, which requires computing the gradient of outputs with respect to the weights (instead of the gradient of error, as in conventional deep learning). This safe mutation through gradients (SM-G) operator dramatically increases the ability of a simple genetic algorithm-based neuroevolution method to find solutions in high-dimensional domains that require deep and/or recurrent neural networks (which tend to be particularly brittle to mutation), including domains that require processing raw pixels. By improving our ability to evolve deep neural networks, this new safer approach to mutation expands the scope of domains amenable to neuroevolution.
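    A minimal sketch of the output-gradient scaling idea the abstract describes, close in spirit to the paper's SM-G variant: each weight's random perturbation is damped according to how strongly the network's outputs respond to that weight. PyTorch, the toy network, the reference batch, and the mutation scale are assumptions, not the authors' code.

```python
import torch

torch.manual_seed(0)
net = torch.nn.Sequential(torch.nn.Linear(8, 16), torch.nn.Tanh(),
                          torch.nn.Linear(16, 4))
batch = torch.randn(32, 8)  # stand-in for archived reference inputs

def safe_mutate(net, batch, sigma=0.1, eps=1e-8):
    outputs = net(batch)
    params = list(net.parameters())
    # Per-weight sensitivity: accumulate squared gradients of each output
    # dimension (summed over the batch) with respect to every weight.
    sens = [torch.zeros_like(p) for p in params]
    for d in range(outputs.shape[1]):
        grads = torch.autograd.grad(outputs[:, d].sum(), params,
                                    retain_graph=True)
        for s, g in zip(sens, grads):
            s += g ** 2
    with torch.no_grad():
        for p, s in zip(params, sens):
            noise = torch.randn_like(p)
            # Damp perturbations of weights the outputs are sensitive to.
            p += sigma * noise / (s.sqrt() + eps)

safe_mutate(net, batch)
```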

    ES Is More Than Just a Traditional Finite-Difference Approximator

    An evolution strategy (ES) variant based on a simplification of a natural evolution strategy recently attracted attention because it performs surprisingly well in challenging deep reinforcement learning domains. It searches for neural network parameters by generating perturbations to the current set of parameters, checking their performance, and moving in the aggregate direction of higher reward. Because it resembles a traditional finite-difference approximation of the reward gradient, it can naturally be confused with one. However, this ES optimizes for a different gradient than just reward: it optimizes for the average reward of the entire population, thereby seeking parameters that are robust to perturbation. This difference can channel ES into distinct areas of the search space relative to gradient descent, and consequently to networks with distinct properties. This unique robustness-seeking property, and its consequences for optimization, are demonstrated in several domains. They include humanoid locomotion, where networks from policy gradient-based reinforcement learning are significantly less robust to parameter perturbation than ES-based policies solving the same task. While the implications of such robustness and robustness-seeking remain open to further study, this work's main contribution is to highlight such differences and their potential importance.
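    A minimal sketch of the ES loop the abstract describes: sample parameter perturbations, evaluate each, and move in the reward-weighted average direction, which estimates the gradient of the population's average reward rather than of the reward itself. The quadratic stand-in objective and all hyperparameters are hypothetical; a real setup would evaluate policy rollouts.

```python
import numpy as np

rng = np.random.default_rng(0)

def reward(theta):
    # Stand-in objective with optimum at all-ones; a real setup would
    # roll out a policy parameterized by theta and return its return.
    return -np.sum((theta - 1.0) ** 2)

theta = np.zeros(10)
sigma, alpha, npop = 0.1, 0.05, 50
for step in range(200):
    eps = rng.standard_normal((npop, theta.size))
    returns = np.array([reward(theta + sigma * e) for e in eps])
    returns = (returns - returns.mean()) / (returns.std() + 1e-8)
    # Estimates the gradient of E[reward(theta + sigma * eps)], i.e. the
    # population's average reward, not the reward at theta itself.
    theta += alpha / (npop * sigma) * eps.T @ returns

print(theta.round(2))  # hovers near the optimum at all-ones
```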

    A Generalization of the Ramanujan Polynomials and Plane Trees

    Full text link
    Generalizing a sequence of Lambert, Cayley and Ramanujan, Chapoton has recently introduced a polynomial sequence Q_n := Q_n(x,y,z,t) defined by Q_1 = 1 and Q_{n+1} = [x + nz + (y+t)(n + y\partial_y)] Q_n. In this paper we prove Chapoton's conjectured duality formula Q_n(x,y,z,t) = Q_n(x+nz+nt, y, -t, -z), and answer his question about the combinatorial interpretation of Q_n. In fact, we give combinatorial interpretations of these polynomials in terms of plane trees, half-mobile trees, and forests of plane trees. Our approach also leads to a general formula that unifies several known results for enumerating trees and plane trees.
    Comment: 20 pages, 2 tables, 8 figures, see also http://math.univ-lyon1.fr/~gu
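    A short SymPy check (not from the paper) that unwinds the recurrence Q_{n+1} = [x + nz + (y+t)(n + y\partial_y)] Q_n from Q_1 = 1 and verifies the conjectured duality for small n; the helper names are illustrative.

```python
import sympy as sp

x, y, z, t = sp.symbols("x y z t")

def next_Q(Q, n):
    # One step of Chapoton's recurrence: Q_{n+1} = [x + nz + (y+t)(n + y d/dy)] Q_n.
    return sp.expand(x * Q + n * z * Q + (y + t) * (n * Q + y * sp.diff(Q, y)))

Qs = [sp.Integer(1)]          # Qs[k] holds Q_{k+1}, starting from Q_1 = 1
for n in range(1, 4):
    Qs.append(next_Q(Qs[-1], n))

print(Qs[1])                  # Q_2 = x + y + z + t

# Check the duality Q_n(x,y,z,t) = Q_n(x + nz + nt, y, -t, -z) for n = 1..4.
for n in (1, 2, 3, 4):
    Q = Qs[n - 1]
    dual = Q.subs({x: x + n * z + n * t, z: -t, t: -z}, simultaneous=True)
    assert sp.expand(Q - dual) == 0
```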